Feature: add tests for meps dataset #38
Conversation
Also allowed calling train_model.main with arguments (it will still use sys.argv when no arguments are supplied in the function call); a sketch of this pattern follows after these commit notes.
Fixed an error in plotting where a non-callable cartopy projection from Config was called. Used the current mesh generation from neural-lam instead of weather-model-graphs. Finished the test of the training call.
…t will be installed anyway as a dependency during the other pip installs
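A minimal sketch of the argument-handling pattern described above, not the actual neural-lam code (the "--epochs" flag is illustrative): argparse falls back to sys.argv[1:] when parse_args receives None, so tests can pass an explicit argument list while CLI use is unchanged.

```python
import argparse


def main(input_args=None):
    parser = argparse.ArgumentParser(description="Train model")
    parser.add_argument("--epochs", type=int, default=1)  # illustrative flag
    # parse_args(None) reads sys.argv[1:]; a test can instead call
    # main(["--epochs", "1"]) with an explicit list of arguments.
    args = parser.parse_args(input_args)
    return args


if __name__ == "__main__":
    main()  # CLI use: arguments come from sys.argv
```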
Looks great to me! It will be really nice to have this automated testing on a real dataset. What do you think about creating a file called DEVELOPING.md in the repo root where you detail the contents of the test data a bit more?
@sadamov could you give this a read through too please?
- removed linting dependencies
- minor changes to test file
- added notebook outlining generation of meps_example_reduced from meps_example
I added this as a notebook instead, since I could then include the code directly. Let me know if you would still prefer just an md file :)
Could we move this and all future notebooks to a …
Very nice @SimonKamuk, we definitely needed tests! Well done 🚀
I tested this with a push to mllam-repo and all tests ran smoothly.
- I think you should add a sentence or two about the new testing to https://github.com/mllam/neural-lam/blob/main/README.md#development-and-contributing
- These tests take 150 s (with download) and 100 s (without) on my machine. I think that's okay since they mostly run in a separate GitHub runner on push. In the future we could also use Mock objects for long-running tests (e.g. @mock.patch('train_model.main'); a sketch follows after this comment).
Otherwise, I only requested some smaller changes. Thanks again Simon!
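A rough sketch of the mocking idea suggested above, assuming a pytest-style test and that train_model is importable with a main() entry point (the flag name and test body are illustrative, not the actual tests in this PR):

```python
from unittest import mock

import train_model  # neural-lam's training entry point


@mock.patch("train_model.main")
def test_training_call_is_mocked(mock_main):
    # While the patch is active, any code that calls train_model.main
    # hits this mock, so the expensive training loop never runs.
    train_model.main(["--epochs", "1"])  # illustrative flag
    mock_main.assert_called_once_with(["--epochs", "1"])
```

In a real test, the code under test (rather than the test itself) would be what invokes train_model.main; the point is only that the patch swaps out the long-running call.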
I only have two minor comments about comments you could add to the code. Otherwise this is good to go.
Remember to update the changelog to say that you have added testing of dataset loading, graph creation, and training based on a reduced MEPS dataset stored on AWS S3! This is a big step!!
Also, I think it would be nice to add CI/CD badges for the testing to the top of the README, if @sadamov and @joeloskarsson are ok with that?
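For reference, a GitHub Actions status badge is a single line of Markdown; a sketch using the run_tests.yml workflow path referenced later in this thread:

```markdown
![Unit Tests](https://github.com/mllam/neural-lam/actions/workflows/run_tests.yml/badge.svg)
```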
This is very valuable to have! I had some questions about the added dependencies in requirements.txt, and also some follow-ups to earlier discussions that might be good to take a look at.
I have no opinion on CI/CD badges, so please go ahead if you'd like.
I think we just need to remove pytorch-geometric from the requirements.txt and then this is good to go!
All looking good now I think. I have only read the code, but trust that others have tested that everything runs. Great work on this!
Ready!
@sadamov Do you have any further requests or are we ready to merge? :)
Ready to merge! 🚀
@SimonKamuk at the moment I am seeing "Unit Tests: Failing" on the badge, even though this is not the case for the main branch. I was looking at https://github.com/mllam/neural-lam/actions/workflows/run_tests.yml and it seems the failures were on some other branch. I really don't know much about how these badges work, so I am not sure how it is supposed to function 😅 Do you know if there is a way to tweak it so that it only applies to the main branch? Or would that require large changes to the way the GitHub workflows are set up?
Yeah, I noticed that as well. I could specify that the badge should only refer to the main branch, but the issue is that the workflows are not running on pushes to main, so there are no runs to refer to. In the recent commit I made these changes, but I am not sure what the implications are of no longer ignoring pushes to main (why were they ignored in the first place, @leifdenby?).
Ok, that's great! I guess running on push to main is generally unnecessary since we run everything before merging. However, I don't think it matters if we run it an extra time just for the badge (a sketch of the relevant badge and workflow trigger follows below).
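A sketch of the two pieces being discussed, assuming the existing .github/workflows/run_tests.yml: the badge can be pinned to main with a branch query parameter, and the workflow can run on pushes to main so the badge has runs to refer to.

```yaml
# .github/workflows/run_tests.yml (trigger section only)
# Badge in README.md, pinned to the main branch:
#   ![Unit Tests](https://github.com/mllam/neural-lam/actions/workflows/run_tests.yml/badge.svg?branch=main)
on:
  push:
    branches: [main]  # run on pushes to main so the badge has runs to show
  pull_request:       # still run on every pull request
```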
Implemented tests for loading a reduced-size MEPS example dataset, creating graphs, and training the model (a rough sketch of such a test follows below).
Work TODO:
- meps_example_reduced
- neural_lam.weather_dataset.WeatherDataset from downloaded data
Closes #30
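A rough sketch of what a dataset-loading test could look like; the test body, dataset location, and the WeatherDataset constructor signature are assumptions for illustration, not the actual tests from this PR:

```python
from neural_lam.weather_dataset import WeatherDataset


def test_load_reduced_meps_dataset():
    # Assumes the meps_example_reduced dataset has already been downloaded
    # from AWS S3 into the local data directory; the constructor arguments
    # shown here are an assumption about the WeatherDataset API.
    dataset = WeatherDataset("meps_example_reduced")
    # A successfully loaded dataset should yield at least one sample.
    assert len(dataset) > 0
```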